    A general framework for positioning, evaluating and selecting the new generation of development tools.

    This paper focuses on the evaluation and positioning of a new generation of development tools: collections of subtools (report generators, browsers, debuggers, GUI builders, ...) and programming languages that are designed to work together under a common graphical user interface and are therefore called environments. Several trends in IT have led to a diverse range of development tools that can be classified in numerous categories, for example object-oriented tools, GUI tools, upper- and lower-CASE tools, client/server tools and 4GL environments. Such a classification does not adequately cover the tools discussed in this paper, for the simple reason that only one criterion at a time is used to distinguish them; modern visual development environments often fit several categories because several criteria apply to them to some extent. In this study we offer a broad classification scheme with which such tools can be positioned and which can be refined through further research.

    A branch and bound algorithm to optimize the representation of tabular decision processes.

    Decision situations have various aspects: knowledge acquisition and structuring, knowledge representation, knowledge validation and decision making. It has been recognized in the literature that decision tables can play an important role in each of these stages. It is, however, not necessary to use only one representation formalism during the whole life cycle of an intelligent system; likewise, different formats of the same formalism may serve different purposes in the development process. Important in this respect is the search for automated and, if possible, optimized transitions between different formats of a formalism and between various formalisms. In this paper, a branch and bound algorithm is presented that transforms expanded decision tables, which, because they explicitly enumerate all decision cases, primarily serve an acquisition and verification function, into optimized contracted decision tables, which are primarily used as the target representation of a decision process. An optimal contracted decision table is a contracted decision table whose condition order results in the minimum number of contracted decision columns.
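
    A toy illustration of why the condition order matters is sketched below in Python. It contracts a small expanded table under every condition order and keeps the best one; the exhaustive search over orders is a hypothetical stand-in for the paper's branch and bound, and the table itself is invented for the example.

        from itertools import permutations, product

        def contracted_columns(table, order, values=('Y', 'N')):
            # Number of columns in the contracted table for a given condition
            # order: a block of expanded columns that all share the same action
            # collapses into one column, with don't-cares for the untested conditions.
            def count(fixed, remaining):
                actions = {a for key, a in table.items()
                           if all(key[i] == v for i, v in fixed)}
                if len(actions) == 1:
                    return 1
                cond = remaining[0]
                return sum(count(fixed + [(cond, v)], remaining[1:]) for v in values)
            return count([], list(order))

        # Expanded toy table over three Y/N conditions; the action depends mainly on C1.
        table = {key: ('accept' if key[0] == 'Y' else
                       'review' if key[1] == 'Y' else 'reject')
                 for key in product('YN', repeat=3)}

        # Exhaustive search over condition orders (the paper prunes this with branch and bound).
        best = min(permutations(range(3)), key=lambda o: contracted_columns(table, o))
        print(best, contracted_columns(table, best))   # (0, 1, 2) -> 3 columns
        print(contracted_columns(table, (2, 1, 0)))    # worst order -> 8 columns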

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, an author-supplied abstract, a number of keywords and a classification are provided for each reference; in some cases our own comments are added, whose purpose is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication) and the language of the document. After a description of the scope of the survey, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    On the decomposition of tabular knowledge systems.

    Recently there has been a growing interest in the decomposition of knowledge-based systems and decision tables. Much work in this area has adopted an informal approach. In this paper, we first formalize the notion of decomposition and then study some interesting classes of decompositions. The proposed classification can be used to formulate design goals for mastering the decomposition of large decision tables into smaller components. Importantly, carrying out a decomposition eliminates redundant information from the knowledge base, thereby removing, right from the beginning, a possible source of inconsistency. This, in turn, makes subsequent verification and validation smoother.
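
    The sketch below gives a hypothetical flavour of such a decomposition in Python: a flat decision table in which two conditions only matter through an intermediate concept is split into a small table deriving that concept and a second table that consumes it. The tables and concept names are invented for the example; the point is only that the decomposition is lossless while isolating the shared structure.

        from itertools import product

        # Flat table: three Y/N conditions -> action, where C1 and C2 only
        # matter through an intermediate concept ("high risk").
        flat = {key: ('reject' if (key[0] == 'Y' and key[1] == 'Y') else
                      'manual' if key[2] == 'Y' else 'accept')
                for key in product('YN', repeat=3)}

        # Component 1 derives the intermediate concept ...
        risk_table = {('Y', 'Y'): 'high', ('Y', 'N'): 'low',
                      ('N', 'Y'): 'low', ('N', 'N'): 'low'}
        # ... and component 2 uses that concept instead of C1 and C2.
        decision_table = {('high', 'Y'): 'reject', ('high', 'N'): 'reject',
                          ('low', 'Y'): 'manual', ('low', 'N'): 'accept'}

        def composed(c1, c2, c3):
            return decision_table[(risk_table[(c1, c2)], c3)]

        # The decomposition is lossless: both representations agree on every case.
        assert all(composed(*key) == flat[key] for key in flat)

    With more conditions the gain becomes tangible: two linked tables grow roughly with the sum of their condition spaces, whereas the flat enumeration grows with the product.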

    Designing compliant business processes with obligations and permissions (Business Process Management Workshops).

    The sequence and timing constraints on the activities in business processes are an important aspect of business process compliance. To date, these constraints are most often implicitly transcribed into control-flow-based process models. This implicit representation of constraints, however, complicates the verification, validation and reuse in business process design. In this paper, we investigate the use of temporal deontic assignments on activities as a means to declaratively capture the control-flow semantics that reside in business regulations and business policies. In particular, we introduce PENELOPE, a language to express temporal rules about the obligations and permissions in a business interaction, and an algorithm to generate compliant sequence-flow-based process models that can be used in business process design.
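
    As a rough illustration of the idea of temporal deontic assignments (a hypothetical simplification in Python, not PENELOPE's actual syntax or semantics), an obligation can be read as: once a triggering activity occurs, the obliged activity must follow within a deadline, otherwise the trace violates the rule.

        from dataclasses import dataclass

        @dataclass
        class Obligation:
            trigger: str      # activity that activates the obligation
            obliged: str      # activity that must then be performed
            deadline: int     # maximum number of time steps after the trigger

        def violations(trace, obligations):
            """trace: list of (timestamp, activity) pairs, ordered by timestamp."""
            found = []
            for ob in obligations:
                for t_trig, act in trace:
                    if act != ob.trigger:
                        continue
                    fulfilled = any(a == ob.obliged and t_trig <= t <= t_trig + ob.deadline
                                    for t, a in trace)
                    if not fulfilled:
                        found.append((ob, t_trig))
            return found

        rules = [Obligation(trigger='place_order', obliged='pay', deadline=30)]
        trace = [(0, 'place_order'), (10, 'ship'), (45, 'pay')]
        print(violations(trace, rules))   # payment came after the 30-step deadline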

    Unified patterns to transform business rules into an event coordination mechanism.

    Business rules define and constrain various aspects of the business, such as vocabulary, behavior and organizational issues. Enforcing the rules of the business in information systems is, however, not straightforward, because different mechanisms exist for the (semi-)automatic transformation of the various kinds of business constraints and rules. In this paper, we examine if and how business rules, not only data rules but also process rules, timing rules, authorization rules, etc., can be expressed in SBVR and translated, using patterns, into a more uniform event mechanism, such that event handling can provide integrated enforcement of business rules of many kinds.
    Keywords: Business rules; Event coordination; Business processes; SBVR; Declarative process modeling.
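
    A minimal sketch of the target event mechanism (hypothetical Python, not the paper's actual transformation patterns or SBVR syntax): a declarative rule is registered as an event-condition-action triple and enforced whenever the corresponding business event is raised.

        class EventBus:
            def __init__(self):
                self.handlers = {}

            def register(self, event_type, condition, action):
                self.handlers.setdefault(event_type, []).append((condition, action))

            def raise_event(self, event_type, payload):
                for condition, action in self.handlers.get(event_type, []):
                    if condition(payload):
                        action(payload)

        bus = EventBus()
        # "An order of more than 10,000 EUR must be approved by a manager."
        bus.register('order_placed',
                     condition=lambda order: order['amount'] > 10_000,
                     action=lambda order: order.setdefault('required_approvals',
                                                           []).append('manager'))

        order = {'id': 42, 'amount': 12_500}
        bus.raise_event('order_placed', order)
        print(order['required_approvals'])   # ['manager']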

    Verification and validation of knowledge-based systems with an example from site selection.

    In this paper, the verification and validation of Knowledge-Based Systems (KBS) using decision tables (DTs) is one of the central issues. It is illustrated using real market data taken from industrial site selection problems. One of the main problems with KBS is that many anomalies often remain after the knowledge has been elicited, and as a consequence the quality of the KBS degrades. Evaluating the elicited knowledge consists mainly of two parts: verification and validation (V&V). To distinguish between the two, the following phrase is regularly used: verification deals with 'building the system right', while validation involves 'building the right system'. In the context of DTs, it has been claimed from the early years of DT research onwards that DTs are well suited for V&V purposes. It is therefore explained how V&V of the modelled knowledge can be performed; in this respect, use is made of stated response modelling design techniques to select decision rules from a DT. Our approach is illustrated with a case study dealing with the location problem of a (petro)chemical company in a port environment. The KBS developed has been named Matisse, an acronym of Matching Algorithm, a Technique for Industrial Site Selection and Evaluation.
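
    The kind of verification meant here can be illustrated with a small hypothetical sketch in Python: given a decision table whose columns may contain don't-care entries, the checks below flag uncovered condition combinations (incompleteness) and combinations that match columns with conflicting actions (contradiction). The table is invented for the example.

        from itertools import product

        # Columns written as (condition pattern, action), with '-' as a don't-care.
        rules = [(('Y', '-'), 'accept'),
                 (('Y', 'N'), 'reject')]   # leaves (N, *) uncovered and conflicts on (Y, N)

        def matches(pattern, case):
            return all(p in ('-', c) for p, c in zip(pattern, case))

        def verify(rules, n_conditions, values=('Y', 'N')):
            missing, ambiguous = [], []
            for case in product(values, repeat=n_conditions):
                actions = {a for pattern, a in rules if matches(pattern, case)}
                if not actions:
                    missing.append(case)               # incompleteness anomaly
                elif len(actions) > 1:
                    ambiguous.append((case, actions))  # contradiction anomaly
            return missing, ambiguous

        missing, ambiguous = verify(rules, 2)
        print(missing)    # two uncovered cases: ('N', 'Y') and ('N', 'N')
        print(ambiguous)  # ('Y', 'N') matches both 'accept' and 'reject'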

    Knowledge integration in information systems education through an (inter)active platform of analysis and modelling case studies.

    In this paper we discuss how knowledge integration throughout system analysis, modelling and development courses can be stimulated, by giving an overview of our MIRO project at K.U.Leuven. The project includes an online knowledge base of all-embracing case studies, structured according to the Zachman framework. Supported by collaborative groupware, students not only get the opportunity to consult and compare solutions to the case studies, but can also actively discuss and contribute alternative solutions. In this Problem-Based Learning (PBL) context, students are able to influence and understand the development of a given process through interactive, computerized animations and demos.
    Keywords: cooperative information systems; information systems education; implementing collaborative groupware; digital libraries; knowledge integration.

    Using rule extraction to improve the comprehensibility of predictive models.

    Whereas newer machine learning techniques, like artificial neural networks and support vector machines, have shown superior performance in various benchmarking studies, the application of these techniques remains largely restricted to research environments. Their more widespread adoption is hindered by their lack of explanation capability, which is required in application areas such as medical diagnosis or credit scoring. To overcome this restriction, various algorithms have been proposed to extract a meaningful description of the underlying 'black box' models. These algorithms have a dual goal: to mimic the behavior of the black box as closely as possible while ensuring that the extracted description remains maximally comprehensible. In this research report, we first develop a formal definition of rule extraction and comment on the inherent trade-off between accuracy and comprehensibility. Afterwards, we develop a taxonomy by which rule extraction algorithms can be classified and discuss some criteria by which these algorithms can be evaluated. Finally, an in-depth review of the most important algorithms is given. The report concludes by pointing out some general shortcomings of existing techniques and opportunities for future research.
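
    A minimal sketch of the extraction idea (assuming scikit-learn is available; this is a generic pedagogical-style surrogate, not one of the surveyed algorithms in particular): a shallow decision tree is fitted to the predictions of a black-box model, and its fidelity to that black box is measured alongside its accuracy on the true labels.

        from sklearn.datasets import make_classification
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

        black_box = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                                  random_state=0).fit(X, y)
        y_bb = black_box.predict(X)                      # behaviour to be mimicked

        # A shallow tree is more comprehensible but may mimic the black box less well.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

        fidelity = (surrogate.predict(X) == y_bb).mean() # agreement with the black box
        accuracy = (surrogate.predict(X) == y).mean()    # agreement with the true labels
        print(f"fidelity={fidelity:.3f}, accuracy={accuracy:.3f}")
        print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

    Tightening max_depth makes the extracted rules shorter and more comprehensible, typically at the cost of fidelity, which is exactly the trade-off discussed in the report.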

    Post-processing of association rules.

    In this paper, we situate and motivate the need for a post-processing phase for association rule mining when it is plugged into the knowledge discovery in databases process. Major research effort has already been devoted to optimising the initially proposed mining algorithms. Effectively extracting the most interesting knowledge nuggets from the standard output of these algorithms, however, remains a major challenge, since running them commonly produces a vast number of association rules, and this sheer multitude often clouds the interpreter's perception. Properly assessing the usefulness of the generated output therefore requires dealing effectively with various forms of redundancy and with rules that are plainly uninteresting. To that end, we give a tentative overview of some of the main post-processing tasks, taking into account the efforts that have already been reported in the literature.
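
    One such post-processing task, redundancy pruning, can be sketched as follows (hypothetical Python; the rules and confidence values are invented for the example): a rule is dropped when a more general rule, with the same consequent and a strict subset of its antecedent, already reaches at least the same confidence.

        from collections import namedtuple

        Rule = namedtuple('Rule', 'antecedent consequent confidence')

        def prune_redundant(rules):
            kept = []
            for r in rules:
                redundant = any(g.consequent == r.consequent
                                and g.antecedent < r.antecedent      # strict subset
                                and g.confidence >= r.confidence
                                for g in rules)
                if not redundant:
                    kept.append(r)
            return kept

        rules = [Rule(frozenset({'bread'}), 'butter', 0.80),
                 Rule(frozenset({'bread', 'milk'}), 'butter', 0.78),   # redundant
                 Rule(frozenset({'bread', 'jam'}), 'butter', 0.95)]    # adds information

        for r in prune_redundant(rules):
            print(sorted(r.antecedent), '->', r.consequent, r.confidence)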